Incremental proximal methods for large scale convex optimization
Author
Abstract
We consider the minimization of a sum ∑_{i=1}^m f_i(x) consisting of a large number of convex component functions f_i. For this problem, incremental methods consisting of gradient or subgradient iterations applied to single components have proved very effective. We propose new incremental methods, consisting of proximal iterations applied to single components, as well as combinations of gradient, subgradient, and proximal iterations. We provide a convergence and rate of convergence analysis of a variety of such methods, including some that involve randomization in the selection of components. We also discuss applications in a few contexts, including signal processing and inference/machine learning.
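As a concrete illustration of the basic iteration, the following Python sketch applies x ← prox_{αf_i}(x) with a randomly drawn component at each step. It assumes quadratic components f_i(x) = 0.5(a_i·x − b_i)², for which the proximal map has a closed form; the data and names are illustrative, not taken from the paper.

import numpy as np

# Minimal sketch of an incremental proximal iteration: at each step,
# pick one component i at random and set x <- prox_{alpha * f_i}(x).
# For f_i(x) = 0.5*(a_i.x - b_i)^2 the proximal map is closed form:
#   prox_{alpha f_i}(z) = z - alpha*(a_i.z - b_i)/(1 + alpha*||a_i||^2) * a_i

rng = np.random.default_rng(0)
m, n = 200, 5
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true                               # consistent system for this toy run

x = np.zeros(n)
alpha = 1.0
for k in range(20 * m):
    i = rng.integers(m)                      # randomized component selection
    a_i, b_i = A[i], b[i]
    resid = a_i @ x - b_i
    x = x - alpha * resid / (1.0 + alpha * (a_i @ a_i)) * a_i   # proximal step

print("distance to solution:", np.linalg.norm(x - x_true))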
Similar Papers
Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey
We survey incremental methods for minimizing a sum ∑_{i=1}^m f_i(x) consisting of a large number of convex component functions f_i. Our methods consist of iterations applied to single components, and have proved very effective in practice. We introduce a unified algorithmic framework for a variety of such methods, some involving gradient and subgradient iterations, which are known, and some involvin...
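The combination of gradient and proximal iterations mentioned above can be sketched as follows: each step takes a gradient step on one smooth component and then a proximal step on a shared nonsmooth term. This is our hedged illustration under assumed data (components a_i, b_i and an ℓ1 term), not the survey's exact method.

import numpy as np

def soft_threshold(z, t):
    # prox of t*||.||_1: componentwise shrinkage toward zero
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(1)
m, n, lam = 300, 10, 0.05
A = rng.normal(size=(m, n))
b = A @ rng.normal(size=n) + 0.1 * rng.normal(size=m)

x = np.zeros(n)
for k in range(50 * m):
    alpha = 0.1 / (1 + k // m)            # diminishing stepsize per pass
    i = rng.integers(m)
    grad_i = (A[i] @ x - b[i]) * A[i]     # gradient of 0.5*(a_i.x - b_i)^2
    x = soft_threshold(x - alpha * grad_i, alpha * lam / m)   # proximal step

print("nonzero coefficients:", int(np.count_nonzero(np.abs(x) > 1e-3)))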
Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
Majorization-minimization algorithms consist of successively minimizing a sequence of upper bounds of the objective function. These upper bounds are tight at the current estimate, and each iteration monotonically drives the objective function downhill. Such a simple principle is widely applicable and has been very popular in various scientific fields, especially in signal processing and statist...
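A minimal scalar instance of the principle, assuming the standard quadratic majorizer of |x| (our illustration, not taken from the paper):

# Majorization-minimization for F(x) = 0.5*(x - b)^2 + lam*|x|.
# Majorize |x| at x_k by the tight quadratic bound
#   |x| <= x^2 / (2|x_k|) + |x_k| / 2   (equality at x = x_k),
# then minimize the surrogate in closed form; F decreases monotonically.

b, lam = 3.0, 1.0
x = 10.0                                  # initial estimate (nonzero)
for k in range(100):
    w = abs(x)                            # curvature of the majorizer
    x = b * w / (w + lam)                 # argmin of the quadratic surrogate
print(x)                                  # approaches b - lam = 2.0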
Scalable nonconvex inexact proximal splitting
We study a class of large-scale, nonsmooth, and nonconvex optimization problems. In particular, we focus on nonconvex problems with composite objectives. This class includes the extensively studied class of convex composite objective problems as a subclass. To solve composite nonconvex problems we introduce a powerful new framework based on asymptotically nonvanishing errors, avoiding the commo...
Incremental Subgradients for Constrained Convex Optimization: A Unified Framework and New Methods
We present a unifying framework for nonsmooth convex minimization bringing together ε-subgradient algorithms and methods for the convex feasibility problem. This development is a natural step for ε-subgradient methods in the direction of constrained optimization, since the Euclidean projection frequently required in such methods is replaced by an approximate projection, which is often easier to co...
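One way such an approximate projection can work is to replace the exact Euclidean projection by a projection onto a halfspace cut by a subgradient of the constraint function. The following is a hedged sketch on an assumed toy problem, not the paper's algorithm:

import numpy as np

# Minimize f(x) = ||x - c||_1 subject to g(x) = ||x||_2 - 1 <= 0.
# When infeasible, project onto the halfspace {y : g(x) + s.(y - x) <= 0},
# s a subgradient of g at x; this halfspace contains the feasible set,
# so the projection is a cheap approximate projection.

n = 5
c = np.full(n, 2.0)
x = np.zeros(n)
for k in range(1, 2001):
    alpha = 1.0 / k
    x = x - alpha * np.sign(x - c)        # subgradient step on the objective
    gx = np.linalg.norm(x) - 1.0          # constraint value
    if gx > 0:                            # approximate projection when infeasible
        s = x / np.linalg.norm(x)         # (sub)gradient of g at x (x != 0 here)
        x = x - (gx / (s @ s)) * s        # project onto the subgradient halfspace

print("constraint norm:", np.linalg.norm(x))   # settles near the unit ball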
Proximal Algorithms
This monograph is about a class of optimization algorithms called proximal algorithms. Much like Newton’s method is a standard tool for solving unconstrained smooth optimization problems of modest size, proximal algorithms can be viewed as an analogous tool for nonsmooth, constrained, large-scale, or distributed versions of these problems. They are very generally applicable, but are especially ...
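For intuition, the proximal operator prox_{th}(v) = argmin_x { h(x) + ||x − v||²/(2t) } reduces to Euclidean projection when h is the indicator of a set, so projected gradient is one simple proximal algorithm. A small sketch with assumed data and names, not from the monograph:

import numpy as np

def prox_box(v, lo, hi):
    # prox of the indicator of the box [lo, hi]^n: projection = clipping
    return np.clip(v, lo, hi)

rng = np.random.default_rng(3)
m, n = 60, 20
A = rng.normal(size=(m, n))
b = rng.normal(size=m)
t = 1.0 / np.linalg.norm(A, 2) ** 2        # stepsize 1/L with L = ||A||_2^2

x = np.zeros(n)
for k in range(500):
    grad = A.T @ (A @ x - b)               # gradient of 0.5*||Ax - b||^2
    x = prox_box(x - t * grad, 0.0, 1.0)   # forward step, then prox (projection)

print("objective:", 0.5 * np.sum((A @ x - b) ** 2))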
Journal: Math. Program.
Volume: 129, Issue: -
Pages: -
Publication year: 2011